
    Binding tactile and visual sensations via unique association by cross-anchoring between double-touching and self-occlusion

    Binding, i.e., finding the correspondence between sensations in different modalities such as vision and touch, is one of the most fundamental cognitive functions. Without a priori knowledge of this correspondence, binding is a formidable problem for a robot, since it often perceives multiple physical phenomena through its different modal sensors and must therefore correctly match the foci of attention across modalities, which may have multiple correspondences with one another. We suppose that learning a multimodal representation of the body should be the first step toward binding, since the morphological constraints in self-body observation make the binding problem tractable. In perceiving its own body, the multimodal sensations are expected to be constrained so that the unique parts of the multiple correspondences reflect the body's morphology. In this paper, we propose a method to match the foci of attention in vision and touch through unique association by cross-anchoring between different modalities. Simple experiments show the validity of the proposed method.
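
    A minimal sketch of the cross-anchoring idea, under invented assumptions (the unit counts, the identity ground-truth correspondence, and the Hebbian-style counting rule are all mine, not the paper's): co-activation counts between tactile and visual foci of attention accumulate over time, and mutually best-matching pairs are read off as the unique, anchored correspondences.

```python
import numpy as np

# Toy reading of cross-anchoring (not the authors' implementation):
# co-occurrence counts between tactile and visual foci are accumulated,
# and pairs whose association is strongest in both their row and their
# column are taken as uniquely anchored correspondences.

rng = np.random.default_rng(0)
N_TACTILE, N_VISUAL = 8, 8          # hypothetical numbers of attention foci
counts = np.zeros((N_TACTILE, N_VISUAL))

def observe(tactile_foci, visual_foci):
    """Strengthen every co-active (touch, vision) pair in this time step."""
    for t in tactile_foci:
        for v in visual_foci:
            counts[t, v] += 1.0

def anchored_pairs():
    """Return pairs that are mutual best matches, i.e. unique associations."""
    pairs = []
    for t in range(N_TACTILE):
        v = int(np.argmax(counts[t]))
        if counts[t, v] > 0 and int(np.argmax(counts[:, v])) == t:
            pairs.append((t, v))
    return pairs

# Toy data: the true correspondence is the identity map; each step, the
# robot perceives its own body part plus one distractor in each modality.
for _ in range(500):
    part = rng.integers(N_TACTILE)
    observe([part, rng.integers(N_TACTILE)], [part, rng.integers(N_VISUAL)])

print(anchored_pairs())
```

    The mutual-best-match test is what encodes "uniqueness" here: a pair is anchored only when each focus is the other's single strongest partner, so ambiguous multiple correspondences are filtered out.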

    A Constructive Model of Mother-Infant Interaction towards Infant’s Vowel Articulation

    Human infants come to acquire the same phonemes as adults without either the ability to articulate them or any explicit knowledge of them. To understand this still-unrevealed aspect of human cognitive development, building a robot that reproduces such a developmental process appears effective; it would also contribute to a design principle for robots that can communicate with human beings. Based on implications from behavioral studies, this paper hypothesizes that the caregiver's parroting of the robot's cooing plays an important role in the phoneme acquisition process, and proposes a constructive model of it. We validate the proposed model by examining whether a real robot can acquire Japanese vowels through interaction with its caregiver.
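
    The parroting hypothesis can be caricatured in a few lines. The sketch below is a hypothetical reduction, not the paper's model: adult vowels are stand-in points in a 2-D articulatory space, the caregiver answers each coo with the nearest adult vowel, and the robot pulls its vowel prototypes toward what it hears.

```python
import numpy as np

# Hedged toy sketch of the parroting hypothesis (all quantities invented):
# the robot coos from noisy articulatory prototypes, the caregiver
# "parrots back" the nearest adult vowel, and the robot drags its own
# prototypes toward what it hears (a simple online update).

rng = np.random.default_rng(1)
ADULT_VOWELS = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0],
                         [0.0, -1.0], [0.7, 0.7]])   # stand-ins for 5 vowels
prototypes = rng.normal(scale=0.3, size=ADULT_VOWELS.shape)  # initial coos
LR = 0.1

def caregiver_parrot(coo):
    """Caregiver answers with the adult vowel closest to the robot's coo."""
    return ADULT_VOWELS[np.argmin(np.linalg.norm(ADULT_VOWELS - coo, axis=1))]

for _ in range(2000):
    i = rng.integers(len(prototypes))             # robot picks a coo to produce
    coo = prototypes[i] + rng.normal(scale=0.1, size=2)
    heard = caregiver_parrot(coo)                 # caregiver's parroted response
    prototypes[i] += LR * (heard - coo)           # pull prototype toward response

print(np.round(prototypes, 2))   # prototypes drift toward adult vowel targets
```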

    Show, Attend and Interact: Perceivable Human-Robot Social Interaction through Neural Attention Q-Network

    For safe, natural, and effective human-robot social interaction, it is essential to develop a system that allows a robot to demonstrate perceivable, responsive behaviors in reaction to complex human behaviors. We introduce the Multimodal Deep Attention Recurrent Q-Network (MDARQN), with which the robot exhibits human-like social interaction skills after 14 days of interacting with people in an uncontrolled real-world setting. On each of the 14 days, the system gathered the robot's interaction experiences with people through trial and error and then trained the MDARQN on these experiences using an end-to-end reinforcement learning approach. The results of this interaction-based learning indicate that the robot learned to respond to complex human behaviors in a perceivable and socially acceptable manner.
    Comment: 7 pages, 5 figures, accepted by IEEE-RAS ICRA'17
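
    The day-by-day scheme described above (collect experiences by trial and error, then train on them) can be sketched with a tabular stand-in for the Q-network. Everything concrete here (the action set, the toy environment, and tabular Q-learning in place of the MDARQN) is an assumption for illustration only.

```python
import random
from collections import deque

# Minimal sketch of the day-by-day learning loop (assumed structure, not
# the MDARQN itself): experiences collected by trial and error during the
# day are replayed afterwards to update a tabular Q-function.

ACTIONS = ["wave", "look", "shake_hand", "wait"]   # hypothetical action set
GAMMA, LR, DAYS, STEPS = 0.9, 0.1, 14, 200
Q = {}                                   # (state, action) -> estimated value
replay = deque(maxlen=10_000)            # experience replay buffer
rng = random.Random(0)

def q(s, a):
    return Q.get((s, a), 0.0)

def simulate_step(state, action):
    """Stand-in environment: reward only for waving back at a greeting."""
    reward = 1.0 if (state == "greeting" and action == "wave") else 0.0
    next_state = rng.choice(["greeting", "idle"])
    return reward, next_state

state = "idle"
for _ in range(DAYS):
    for _ in range(STEPS):                         # daytime: explore
        action = rng.choice(ACTIONS)
        reward, nxt = simulate_step(state, action)
        replay.append((state, action, reward, nxt))
        state = nxt
    for s, a, r, nxt in rng.sample(list(replay), min(64, len(replay))):
        target = r + GAMMA * max(q(nxt, b) for b in ACTIONS)  # Q-learning target
        Q[(s, a)] = q(s, a) + LR * (target - q(s, a))

print(max(ACTIONS, key=lambda a: q("greeting", a)))   # expected: "wave"
```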

    Robot-on-Robot Gossiping to Improve Sense of Human-Robot Conversation

    S. Mitsuno, Y. Yoshikawa and H. Ishiguro, "Robot-on-Robot Gossiping to Improve Sense of Human-Robot Conversation," 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 2020, pp. 653-658, doi: 10.1109/RO-MAN47096.2020.9223442. [The 29th IEEE International Conference on Robot & Human Interactive Communication, 31 Aug - 4 Sept 2020]

    Multiple-Robot Mediated Discussion System to support group discussion ∗

    S. Ikari, Y. Yoshikawa and H. Ishiguro, "Multiple-Robot Mediated Discussion System to support group discussion," 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 2020, pp. 495-502, doi: 10.1109/RO-MAN47096.2020.9223444. [The 29th IEEE International Conference on Robot & Human Interactive Communication, 31 Aug - 4 Sept 2020]

    Body Scheme Acquisition by Cross Modal Map Learning among Tactile, Visual, and Proprioceptive Spaces

    How to represent one's own body is one of the most interesting issues in cognitive developmental robotics, which aims to understand the cognitive developmental processes an intelligent robot would require and how to realize them in a physical entity. This paper presents a cognitive model of how a robot acquires its own body representation, that is, a body scheme for the body surface. The internal-observer assumption makes it difficult for a robot to associate sensory information from different modalities, because the robot lacks the references between them that are usually given by the designer at its prenatal stage. Our model is based on cross-modal map learning among the joint (proprioceptive), visual, and tactile sensor spaces, associating pairs of sensor values that are activated simultaneously. We present a preliminary experiment, and then discuss how our model can explain reported phenomena concerning the body scheme, along with future issues.
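
    A rough sketch of co-activation-based cross-modal map learning, with all specifics (unit counts, the posture-dependent ground truth, the counting update) assumed for illustration rather than taken from the paper: whenever a tactile unit and a visual unit fire together under a given joint posture, their connection is strengthened, and the learned weights then predict where a touched spot should appear visually.

```python
import numpy as np

# Assumed toy setting (not the paper's method): discrete joint, tactile,
# and visual units; simultaneous activation strengthens the connection
# between the co-active tactile and visual units for the current posture.

rng = np.random.default_rng(2)
N_JOINT, N_TACTILE, N_VISUAL = 4, 6, 6
w = np.zeros((N_JOINT, N_TACTILE, N_VISUAL))   # association weights

def true_visual_unit(joint, tactile):
    """Toy ground truth: where a touched spot appears, given the posture."""
    return (tactile + joint) % N_VISUAL

for _ in range(3000):
    j = rng.integers(N_JOINT)                  # current posture
    t = rng.integers(N_TACTILE)                # touched body-surface unit
    v = true_visual_unit(j, t)                 # simultaneously seen unit
    w[j, t, v] += 1.0                          # co-activation update

# After learning, the map predicts where a touch will appear visually.
j, t = 2, 3
print(int(np.argmax(w[j, t])), true_visual_unit(j, t))   # prediction vs. truth
```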

    Ostensive-Cue Sensitive Learning and Exclusive Evaluation of Policies: A Solution for Measuring Contingency of Experiences for Social Developmental Robot

    Joint-attention-related behaviors (JARBs) are among the most important and basic cognitive functions for establishing successful communication in human interaction. They are learned gradually during the infant's developmental process and enable the infant to purposefully improve his/her interaction with others. To adopt such a developmental process for building an adaptive, social robot, previous studies proposed several contingency evaluation methods by which an infant robot can sequentially learn primary social skills. These skills include gaze following and social referencing, and could be acquired through interaction with a human-caregiver model in computer simulation. However, two major problems that were not addressed in that research stand in the way of implementing such methods on a real-world robot: (1) mutual dependency among the histograms of the events the robot observes, which increases the error of the internal calculation and consequently decreases the accuracy of the contingency evaluation; and (2) unsynchronized teaching/learning phases of the teaching caregiver and the learning robot, which leave the robot and the caregiver unable to identify the suitable timing for learning and teaching, respectively. In this paper, we address these two problems and propose two algorithms to solve them: (1) exclusive evaluation of policies (XEP) for the former, and (2) ostensive-cue-sensitive learning (OsL) for the latter. To show the effect of the proposed algorithms, we conducted a real-world human-robot interaction experiment with 48 subjects and compared the performance of the learning robot with and without the proposed algorithms. Our results show that adopting the proposed algorithms improves the robot's performance in terms of learning efficiency, complexity of the learned behaviors, predictability of the robot, and even the participants' subjective evaluation of the robot's intelligence and the quality of the interaction.
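
    Contingency evaluation itself can be illustrated with a deliberately simple score; note this is a generic reading of "contingency", not the paper's XEP or OsL algorithms: a robot action is contingent to the extent that a caregiver event is more likely after that action than at baseline.

```python
from collections import defaultdict

# Generic illustration of contingency scoring (assumed, not XEP/OsL):
# an action's contingency is how much more likely a caregiver event
# becomes after that action, compared with the overall baseline rate.

counts = defaultdict(lambda: [0, 0])   # action -> [event_followed, total]
baseline = [0, 0]                      # [event_occurred, total]

def record(action, event_followed):
    """Log one robot action and whether a caregiver event followed it."""
    counts[action][0] += int(event_followed)
    counts[action][1] += 1
    baseline[0] += int(event_followed)
    baseline[1] += 1

def contingency(action):
    """P(event | action) - P(event): positive means the action is contingent."""
    seen, total = counts[action]
    return seen / total - baseline[0] / baseline[1]

# Toy log: the caregiver reliably looks where the robot points.
for _ in range(50):
    record("point", True)
    record("babble", False)

print(round(contingency("point"), 2), round(contingency("babble"), 2))
```

    A score like this only works if each policy's event counts are kept apart, which hints at why the paper's exclusive evaluation of policies matters in practice.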